
    Fog computing enabled cost-effective distributed summarization of surveillance videos for smart cities

    Fog computing is emerging as an attractive paradigm for academics and industry alike, and holds potential for new breeds of services and user experiences. However, Fog computing is still nascent and requires strong groundwork before it can be adopted as a practically feasible, cost-effective, efficient, and easily deployable alternative to the currently ubiquitous cloud. Fog computing promises to introduce cloud-like services on the local network while reducing cost. In this paper, we present a novel resource-efficient framework for distributed video summarization over a multi-region fog computing paradigm. The nodes of the Fog network are based on the resource-constrained Raspberry Pi single-board computer. Surveillance videos are distributed across the nodes and a summary is generated over the Fog network, which is periodically pushed to the cloud to reduce bandwidth consumption. Realistic workloads in the form of surveillance videos are used to evaluate the proposed system. Experimental results suggest that even on an extremely resource-limited single-board computer, the proposed framework incurs very little overhead and scales well compared with off-the-shelf, costly cloud solutions, validating its effectiveness for IoT-assisted smart cities. (C) 2018 Elsevier Inc. All rights reserved. Nasir, M.; Muhammad, K.; Lloret, J.; Sangaiah, A.K.; Sajjad, M. (2019). Fog computing enabled cost-effective distributed summarization of surveillance videos for smart cities. Journal of Parallel and Distributed Computing, 126:161-170. https://doi.org/10.1016/j.jpdc.2018.11.004
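
    As a rough illustration of the per-node workload, the following sketch shows a minimal frame-differencing keyframe selector of the kind a Raspberry Pi fog node could run. It is a stand-in under stated assumptions, not the paper's actual summarization method; the threshold and frame size are arbitrary, and OpenCV (cv2) is assumed to be available.

    ```python
    # Hypothetical per-node summarizer: keep only frames that differ enough
    # from the last kept frame (a stand-in for the paper's method).
    import cv2

    def summarize(video_path, diff_threshold=30.0):
        cap = cv2.VideoCapture(video_path)
        keyframes, prev = [], None
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(cv2.resize(frame, (320, 240)), cv2.COLOR_BGR2GRAY)
            if prev is None or cv2.absdiff(gray, prev).mean() > diff_threshold:
                keyframes.append(frame)  # only "novel" frames enter the summary
                prev = gray
        cap.release()
        return keyframes  # the summary, not the raw video, goes to the cloud
    ```

    Pushing only the keyframes upstream is what yields the bandwidth saving the abstract describes.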

    An energy-efficient off-loading scheme for low latency in collaborative edge computing

    Mobile terminals such as smartphones and laptops run applications with frequent computational demands but have limited battery power. Edge computing is introduced to offload terminals' tasks so as to meet quality-of-service requirements such as low delay and low energy consumption. By offloading computation tasks, edge servers can enable terminals to collaboratively run highly demanding applications within acceptable delay bounds. However, existing schemes barely consider the characteristics of the edge servers, which leads to tasks being assigned to servers essentially at random, so tasks with high computational intensity ("big tasks") may be assigned to servers with low capability. In this paper, a task is divided into several subtasks, and the subtasks are offloaded according to the characteristics of the edge servers, such as transmission distance and central processing unit (CPU) capacity. With this multi-subtasks-to-multi-servers model, a low-complexity adaptive offloading scheme based on the Hungarian algorithm is proposed. Extensive simulations show that the scheme reduces offloading latency while keeping energy consumption low.
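
    A minimal sketch of the multi-subtasks-to-multi-servers matching using SciPy's Hungarian-algorithm solver; the cost model (transmission time plus computation time) and all numbers below are illustrative assumptions rather than the paper's exact formulation.

    ```python
    # Assign each subtask to one edge server so total cost is minimized.
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    task_cycles = np.array([2e9, 5e9, 1e9])         # CPU cycles per subtask
    task_bits   = np.array([4e6, 8e6, 2e6])         # bits to transmit
    cpu_hz      = np.array([1e9, 3e9, 2e9, 1.5e9])  # server CPU capacities
    rate_bps    = np.array([5e6, 2e6, 8e6, 4e6])    # uplink rate to each server

    # cost[i, j] = time to send subtask i to server j + time to compute it there
    cost = task_bits[:, None] / rate_bps[None, :] + \
           task_cycles[:, None] / cpu_hz[None, :]

    rows, cols = linear_sum_assignment(cost)        # optimal one-to-one matching
    print(list(zip(rows, cols)), cost[rows, cols].sum())
    ```

    The solver runs in polynomial time, which is consistent with the abstract's claim of low complexity.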

    A Q-learning-based approach for deploying dynamic service function chains

    As the size and service requirements of today's networks gradually increase, large numbers of proprietary devices are deployed, which increases network complexity, creates information-security risks, and makes network services and service providers increasingly difficult to manage. Network function virtualization (NFV) technology is one solution to this problem. NFV separates network functions from hardware and deploys them as software on commodity servers. NFV can be used to improve service flexibility and to isolate the services provided to each user, thus guaranteeing the security of user data. The use of NFV technology therefore raises many problems worth studying. For example, when there is a free choice of network path, one problem is how to choose a service function chain (SFC) that both meets the requirements and offers the service provider maximum profit. Most existing solutions are either heuristic algorithms with high time efficiency or integer linear programming (ILP) algorithms with high accuracy; an algorithm is needed that balances time efficiency and accuracy. In this paper, we propose the Q-learning Framework Hybrid Module algorithm (QLFHM), which uses reinforcement learning to solve this SFC deployment problem in dynamic networks. The reinforcement learning module in QLFHM is responsible for producing alternative paths, while the load-balancing module picks the optimal solution from among them. A comparison simulation experiment on a dynamic network topology shows that the proposed algorithm outputs an approximately optimal solution in a relatively short time while also taking network load balance into account, thus maximizing the benefit to the service provider.
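
    For flavor, here is a bare-bones tabular Q-learning loop of the kind QLFHM's reinforcement-learning module might build on; the states, actions, reward, and hyperparameters are toy placeholders, not the paper's actual MDP design.

    ```python
    # Tabular Q-learning: states could encode network conditions, actions
    # could index candidate SFC paths; both are stand-ins here.
    import random
    import numpy as np

    n_states, n_actions = 10, 4
    Q = np.zeros((n_states, n_actions))
    alpha, gamma, eps = 0.1, 0.9, 0.2  # learning rate, discount, exploration

    def step(state, action):
        """Hypothetical environment: returns (next_state, reward)."""
        return random.randrange(n_states), random.random()

    s = 0
    for _ in range(5000):
        a = random.randrange(n_actions) if random.random() < eps else int(Q[s].argmax())
        s2, r = step(s, a)
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])  # Bellman update
        s = s2
    # A load-balancing stage would then choose among the top-scoring paths.
    ```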

    Predictive modelling of building energy consumption based on a hybrid nature-inspired optimization algorithm

    Overall energy consumption has grown over recent decades because of rapid population growth, urbanization, and industrialization. High demand for energy leads to a higher cost per unit of energy, which can impact the running costs of commercial and residential dwellings. Hence, there is a need for more effective predictive techniques that can measure and optimize the energy usage of the large arrays of connected Internet of Things (IoT) devices and control points that constitute modern built environments. In this paper, we propose a lightweight IoT framework for predicting energy usage at a localized level for optimal configuration of building-wide energy dissemination policies. The Autoregressive Integrated Moving Average (ARIMA) model, as a statistical linear model, could be used for this purpose; however, it is unable to model the dynamic nonlinear relationships in nonstationary, fluctuating power-consumption data. Therefore, we have developed an improved hybrid model based on ARIMA, Support Vector Regression (SVR), and Particle Swarm Optimization (PSO) to precisely predict energy usage from the supplied data. The proposed model is evaluated using power-consumption data acquired from environmental actuator devices controlling a large functional space in a building. Results show that the proposed hybrid model outperforms alternative techniques in forecasting power consumption. The approach is appropriate for building energy-policy implementations owing to its precise estimation of energy consumption and its lightweight monitoring infrastructure, which can reduce energy costs. Moreover, it provides an accurate tool for optimizing energy-consumption strategies in wider built environments such as smart cities.
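
    The hybrid idea can be sketched as ARIMA capturing the linear component and SVR modeling the nonlinear residuals. The library choices (statsmodels, scikit-learn), the ARIMA order, and the fixed SVR hyperparameters below are assumptions; in the paper, PSO would search the SVR hyperparameters.

    ```python
    # ARIMA fits the linear trend; SVR learns the leftover nonlinear residuals.
    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA
    from sklearn.svm import SVR

    y = np.sin(np.linspace(0, 20, 200)) + np.random.normal(0, 0.1, 200)  # toy load

    arima = ARIMA(y, order=(2, 1, 2)).fit()           # order is an assumption
    linear_part = arima.predict(start=1, end=len(y) - 1)
    resid = y[1:] - linear_part

    lags = 3                                          # lagged residuals as inputs
    X = np.column_stack([resid[i:len(resid) - lags + i] for i in range(lags)])
    t = resid[lags:]
    svr = SVR(C=10.0, gamma=0.1).fit(X, t)            # C, gamma: PSO's job in the paper

    hybrid_fit = linear_part[lags:] + svr.predict(X)  # linear + nonlinear parts
    ```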

    An IoE Blockchain-Based Network Knowledge Management Model for Resilient Disaster Frameworks

    The disaster area is a constantly changing environment, which can make it challenging to distribute supplies effectively. A lack of accurate information about the required goods and about potential bottlenecks in the distribution process can be detrimental. The success of a response network depends on collaboration, coordination, sovereignty, and the equal distribution of relief resources. To facilitate these interactions and improve knowledge of supply chain operations, a reliable and dynamic logistic system is essential. This study proposes the integration of blockchain technology, the Internet of Things (IoT), and the Internet of Everything (IoE) into the disaster management structure. The proposed disaster response model aims to reduce response times and ensure the secure and timely distribution of goods. The hyper-connected disaster supply network is modeled through a concrete implementation on the Network Simulator 2 (NS2) platform. The simulation results demonstrate that the proposed method yields significant improvements in several key performance metrics: more than a 30% improvement in the successful migration of tasks, a 17% reduction in errors, a 15% reduction in delays, and a 9% reduction in energy consumption.
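
    As a generic illustration of the ledger component, the sketch below hash-chains relief-supply records so that tampering with any earlier record invalidates every later link; this is a textbook blockchain pattern, not the paper's NS2 implementation.

    ```python
    # Minimal hash-chained ledger for relief-supply records (illustrative only).
    import hashlib, json, time

    def make_block(record, prev_hash):
        block = {"time": time.time(), "record": record, "prev": prev_hash}
        block["hash"] = hashlib.sha256(
            json.dumps(block, sort_keys=True).encode()).hexdigest()
        return block

    chain = [make_block({"event": "genesis"}, "0" * 64)]
    chain.append(make_block({"supply": "water", "qty": 500, "to": "zone-3"},
                            chain[-1]["hash"]))
    ```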

    An intelligent healthcare system for detection and classification to discriminate vocal fold disorders

    The growing population of senior citizens around the world will pose a major challenge in the future, as they will occupy a significant portion of healthcare facilities. It is therefore necessary to develop intelligent healthcare systems that can be deployed in smart homes and cities for remote diagnosis. To this end, an intelligent healthcare system is proposed in this study. The proposed system is based on the human auditory mechanism and is capable of detecting and classifying various types of vocal fold disorders. In the proposed system, the critical-bandwidth phenomenon is implemented by using bandpass filters spaced over the Bark scale to simulate the human auditory mechanism; the system thus acts like an expert clinician who evaluates a patient's voice by auditory perception. The experimental results show that the proposed system can detect pathology with an accuracy of 99.72%. Moreover, the classification accuracy for vocal fold polyp, keratosis, vocal fold paralysis, vocal fold nodules, and adductor spasmodic dysphonia is 97.54%, 99.08%, 96.75%, 98.65%, 95.83%, and 95.83%, respectively. In addition, an experiment for paralysis versus all other disorders is also conducted, and an accuracy of 99.13% is achieved. The results show that the proposed system is accurate and reliable in vocal fold disorder assessment and can be deployed successfully for remote diagnosis. Moreover, its performance compares favorably with existing disorder assessment systems.
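
    One plausible front end for such a system is sketched below: log band energies from bandpass filters spaced on the Bark scale. The Traunmüller Bark-to-Hz conversion is standard, but the filter order, band count, and feature choice are assumptions about the paper's design.

    ```python
    # Bark-spaced band-energy features; fs must exceed twice the top band edge
    # (e.g. fs = 16000 covers the ~6.4 kHz upper edge used here).
    import numpy as np
    from scipy.signal import butter, sosfilt

    def bark_to_hz(z):
        return 1960.0 * (z + 0.53) / (26.28 - z)   # inverse Traunmüller formula

    def bark_band_energies(x, fs, n_bands=20):
        edges = bark_to_hz(np.linspace(0.2, 20.0, n_bands + 1))
        feats = []
        for lo, hi in zip(edges[:-1], edges[1:]):
            sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
            feats.append(np.log(np.mean(sosfilt(sos, x) ** 2) + 1e-12))
        return np.array(feats)   # per-band log energies fed to a classifier
    ```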

    IoT Resource Allocation and Optimization Based on Heuristic Algorithm

    The Internet of Things (IoT) is a distributed system that connects everything via the Internet. IoT infrastructure contains multiple resources and gateways. In such a system, the problem of optimizing IoT resource allocation and scheduling (IRAS) is vital, because resource allocation (RA) and scheduling deal with the mapping between resources and gateways and are responsible for optimally allocating resources to the available gateways. In the IoT environment, a gateway may face hundreds of resources to connect, so manual resource allocation and scheduling is not feasible. In this paper, the whale optimization algorithm (WOA) is used to solve the RA problem in IoT, with the aim of achieving optimal RA and reducing the total communication cost between resources and gateways. The proposed algorithm is compared with existing algorithms. Results indicate that it performs well: across various benchmarks, the proposed method achieves a lower total communication cost than the alternatives.
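
    The core WOA update rules can be sketched on a generic continuous cost function as below; note that the paper's RA problem is a discrete resource-to-gateway mapping, so the quadratic stand-in cost here is purely illustrative.

    ```python
    # Whale optimization: encircle the best solution, explore random whales,
    # or spiral toward the best (the "bubble-net" move).
    import numpy as np

    def woa(cost, dim, n_whales=20, iters=200, lb=-10.0, ub=10.0):
        rng = np.random.default_rng(0)
        X = rng.uniform(lb, ub, (n_whales, dim))
        best = min(X, key=cost).copy()
        for it in range(iters):
            a = 2.0 - 2.0 * it / iters                # decreases from 2 to 0
            for i in range(n_whales):
                A = 2 * a * rng.random() - a
                C = 2 * rng.random()
                if rng.random() < 0.5:
                    if abs(A) < 1:                    # exploit: encircle best
                        X[i] = best - A * np.abs(C * best - X[i])
                    else:                             # explore: random whale
                        Xr = X[rng.integers(n_whales)]
                        X[i] = Xr - A * np.abs(C * Xr - X[i])
                else:                                 # spiral bubble-net move
                    l = rng.uniform(-1, 1)
                    X[i] = np.abs(best - X[i]) * np.exp(l) * np.cos(2 * np.pi * l) + best
                X[i] = np.clip(X[i], lb, ub)
                if cost(X[i]) < cost(best):
                    best = X[i].copy()
        return best

    print(woa(lambda x: float(np.sum(x ** 2)), dim=5))  # toy stand-in cost
    ```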

    A review on object detection in unmanned aerial vehicle surveillance

    Purpose: Computer vision in drones has gained considerable attention from artificial intelligence researchers. Providing intelligence to drones will resolve many real-time problems. Computer vision tasks such as object detection, object tracking, and object counting are significant for monitoring specified environments. However, factors such as altitude, camera angle, occlusion, and motion blur make them more challenging. Methodology: In this paper, a detailed literature review has been conducted focusing on object detection and tracking using UAVs across different applications. This study summarizes the findings of existing research papers and identifies the research gaps. Contribution: Object detection methods applied to UAV images are classified and elaborated. UAV datasets specific to object detection tasks are listed. Existing research works in different applications are summarized. Finally, a secure onboard processing system built on a robust object detection framework for precision agriculture is proposed to mitigate the identified research gaps.